The Baldwin Effect in Advancing Generalizability of Physics-Informed Neural Networks (2312.03243v2)
Abstract: Physics-informed neural networks (PINNs) are at the forefront of scientific machine learning, making possible the creation of machine intelligence that is cognizant of physical laws and able to accurately simulate them. However, today's PINNs are often trained for a single physics task and require computationally expensive re-training for each new task, even for tasks from similar physics domains. To address this limitation, this paper proposes a pioneering approach to advance the generalizability of PINNs through the framework of Baldwinian evolution. Drawing inspiration from the neurodevelopment of precocial species that have evolved to learn, predict and react quickly to their environment, we envision PINNs that are pre-wired with connection strengths inducing strong biases towards efficient learning of physics. A novel two-stage stochastic programming formulation coupling evolutionary selection pressure (based on proficiency over a distribution of physics tasks) with lifetime learning (to specialize on a sampled subset of those tasks) is proposed to instantiate the Baldwin effect. The evolved Baldwinian-PINNs demonstrate fast and physics-compliant prediction capabilities across a range of empirically challenging problem instances, with more than an order of magnitude improvement in prediction accuracy at a fraction of the computation cost compared to state-of-the-art gradient-based meta-learning methods. For example, when solving the diffusion-reaction equation, a 70x improvement in accuracy was obtained while taking 700x less computational time. This paper thus marks a leap forward in the meta-learning of PINNs as generalizable physics solvers. Sample code is available at https://github.com/chiuph/Baldwinian-PINN.
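The two-stage structure described above (an outer evolutionary loop applying selection pressure over a distribution of tasks, and an inner "lifetime learning" step that specializes each individual on a sampled task batch) can be illustrated with a minimal sketch. Everything here is a hypothetical toy, not the authors' formulation: the task family (an ODE u' = -k·u with k sampled per task), the network size, and the use of a closed-form least-squares readout as the lifetime-learning step (in the spirit of the extreme-learning-machine literature the paper draws on) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task family: approximate u(x) with u' = -k*u, u(0) = 1, for k sampled per task.
# "Lifetime learning" = closed-form least squares for the linear output layer;
# evolution shapes only the hidden-layer genome (weights + biases).
X = np.linspace(0.0, 1.0, 50)[:, None]   # collocation points
N_HIDDEN = 20

def features(genome, x):
    # Genome encodes the hidden layer: input weights then biases.
    W = genome[:N_HIDDEN].reshape(1, N_HIDDEN)
    b = genome[N_HIDDEN:].reshape(1, N_HIDDEN)
    return np.tanh(x @ W + b)

def lifetime_learn(genome, k):
    # Specialize on one task: fit the readout by least squares against the
    # known solution exp(-k*x) (a stand-in for a physics-residual loss).
    H = features(genome, X)
    y = np.exp(-k * X[:, 0])
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.mean((H @ beta - y) ** 2)

def fitness(genome, tasks):
    # Selection pressure: mean post-learning error over a sampled task batch.
    return np.mean([lifetime_learn(genome, k) for k in tasks])

# Simple truncation-selection evolution strategy over the hidden-layer genome
# (the paper uses CMA-ES; this is a deliberately minimal stand-in).
dim = 2 * N_HIDDEN
pop = rng.normal(size=(16, dim))
for gen in range(30):
    tasks = rng.uniform(0.5, 3.0, size=5)          # new task batch per generation
    scores = np.array([fitness(g, tasks) for g in pop])
    elite = pop[np.argsort(scores)[:4]]
    pop = elite[rng.integers(0, 4, size=16)] + 0.1 * rng.normal(size=(16, dim))

best = pop[int(np.argmin([fitness(g, rng.uniform(0.5, 3.0, 5)) for g in pop]))]
```

After evolution, `best` is a genome whose hidden features let a cheap least-squares fit solve unseen tasks from the same family, which is the Baldwinian point: the expensive search happens once, across the task distribution, while per-task adaptation stays fast.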
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. 
[2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. 
In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. 
Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. 
[2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. 
Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012). https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021). https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022). https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021). https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023). https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. 
Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Karniadakis, G.E., Kevrekidis, I.G., Lu, L., Perdikaris, P., Wang, S., Yang, L.: Physics-informed machine learning. Nature Reviews Physics 3(6), 422–440 (2021) https://doi.org/10.1038/s42254-021-00314-5
- Raissi, M., Yazdani, A., Karniadakis, G.E.: Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 367(6481), 1026–1030 (2020)
- Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mechanica Sinica 37(12), 1727–1738 (2021)
- Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021)
- Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems: A review. IEEE Transactions on Power Systems (2022)
- Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
- Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
[2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks.
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (pinns) for fluid mechanics: A review. Acta Mechanica Sinica 37(12), 1727–1738 (2021) Cai et al. [2021b] Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021) Huang and Wang [2022] Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems-a review. IEEE Transactions on Power Systems (2022) de Wolff et al. [2021] Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021) Wong et al. [2022] Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (PINNs) for fluid mechanics: a review. Acta Mechanica Sinica 37(12), 1727–1738 (2021)
- Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021)
- Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems: a review. IEEE Transactions on Power Systems (2022)
- de Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
- Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: a fast physics-informed neural network based on coupled-automatic-numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: a tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: a unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ULTIMATE conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 
37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
[2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320.
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers & Fluids 162, 39–54 (2018). https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991). https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998). https://doi.org/10.1090/S0025-5718-98-00913-2
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021). https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Cai, S., Wang, Z., Wang, S., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer 143(6) (2021)
- Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems - a review. IEEE Transactions on Power Systems (2022)
- Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
- Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
[2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. 
Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. 
[2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. 
[2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. 
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes.
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Huang, B., Wang, J.: Applications of physics-informed neural networks in power systems - a review. IEEE Transactions on Power Systems (2022)
- Wolff, T., Carrillo, H., Martí, L., Sanchez-Pi, N.: Assessing physics informed neural networks in ocean modelling and climate change applications. In: AI: Modeling Oceans and Climate Change Workshop at ICLR 2021 (2021)
- Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic-numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023) https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. 
[2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. 
Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. 
[2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. 
[2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. 
[2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. 
Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. 
In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. 
Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3 Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137.
Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362 Chiu et al. [2022] Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. 
[2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. 
[2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. 
[2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. 
Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Wong, J.C., Ooi, C., Gupta, A., Ong, Y.-S.: Learning in sinusoidal spaces with physics-informed neural networks. IEEE Transactions on Artificial Intelligence (2022) https://doi.org/10.1109/TAI.2022.3192362
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
- Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 
37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes.
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Chiu, P.-H., Wong, J.C., Ooi, C., Dao, M.H., Ong, Y.-S.: Can-pinn: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909 Wong et al. [2023] Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: Lsa-pinn: Linear boundary connectivity loss for solving pdes on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023). https://doi.org/10.1109/IJCNN54540.2023.10191236 Wang et al. [2023] Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert’s guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023) Wong et al. [2021] Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854 Krishnapriyan et al. [2021] Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021) Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. 
[2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations.
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al.
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al.
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
- Wong, J.C., Chiu, P.-H., Ooi, C., Dao, M.H., Ong, Y.-S.: LSA-PINN: Linear boundary connectivity loss for solving PDEs on complex geometry. In: 2023 International Joint Conference on Neural Networks (IJCNN), pp. 1–10 (2023) https://doi.org/10.1109/IJCNN54540.2023.10191236
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44. Citeseer (2012)
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Computer Methods in Applied Mechanics and Engineering 395, 114909 (2022) https://doi.org/10.1016/j.cma.2022.114909
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
- Wang, S., Sankaran, S., Wang, H., Perdikaris, P.: An expert's guide to training physics-informed neural networks. arXiv preprint arXiv:2308.08468 (2023)
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. 
[1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. 
IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3. Springer
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Wong, J.C., Gupta, A., Ong, Y.-S.: Can transfer neuroevolution tractably solve your differential equations? IEEE Computational Intelligence Magazine 16(2), 14–30 (2021) https://doi.org/10.1109/MCI.2021.3061854
Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-Driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Hansen, N.: The CMA evolution strategy: A tutorial.
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. 
Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. 
arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. 
Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3 Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Krishnapriyan, A., Gholami, A., Zhe, S., Kirby, R., Mahoney, M.W.: Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34, 26548–26560 (2021)
Penwarden et al. [2023] Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
Downing [2012] Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer
Simpson [1953] Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators.
Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al.
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (pinns): Application to parameterized pdes. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912 Downing [2012] Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Penwarden, M., Zhe, S., Narayan, A., Kirby, R.M.: A metalearning approach for physics-informed neural networks (PINNs): Application to parameterized PDEs. Journal of Computational Physics 477, 111912 (2023) https://doi.org/10.1016/j.jcp.2023.111912
- Downing, K.L.: Heterochronous neural Baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012)
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. INFORMS, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Downing, K.L.: Heterochronous neural baldwinism. In: Artificial Life Conference Proceedings, pp. 37–44 (2012). Citeseer Simpson [1953] Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Simpson, G.G.: The baldwin effect. Evolution 7(2), 110–117 (1953) Powell [2014] Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. 
[2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014) Bakker et al. [2020] Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020) Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO ’18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249 . https://doi.org/10.1145/3205651.3208249 Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. 
Nature Machine Intelligence 1(1), 24–35 (2019)
Fernando et al. [2018] Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. GECCO '18, pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
Stanley et al. [2019] Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations.
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
[2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 
43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Machine Learning Research 18, 1–43 (2018)
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Simpson, G.G.: The Baldwin effect. Evolution 7(2), 110–117 (1953)
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO ’18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang et al.
[2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. 
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3. Springer
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Powell, W.B.: Clearing the jungle of stochastic optimization. In: Bridging Data and Decisions, pp. 109–137. Informs, Catonsville, Maryland, USA (2014)
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018). https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks.
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection.
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Bakker, H., Dunke, F., Nickel, S.: A structuring review on multi-stage optimization under uncertainty: Aligning concepts from theory and practice. Omega 96, 102080 (2020)
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: A survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. 
[2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Fernando, C., Sygnowski, J., Osindero, S., Wang, J., Schaul, T., Teplyashin, D., Sprechmann, P., Pritzel, A., Rusu, A.: Meta-learning by the Baldwin effect. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion (GECCO '18), pp. 1313–1320. Association for Computing Machinery, New York, NY, USA (2018) https://doi.org/10.1145/3205651.3208249
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019)
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. 
IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Stanley, K.O., Clune, J., Lehman, J., Miikkulainen, R.: Designing neural networks through neuroevolution. Nature Machine Intelligence 1(1), 24–35 (2019) Miikkulainen and Forrest [2021] Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021) Hornik et al. [1989] Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural networks 2(5), 359–366 (1989) Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-b., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124 Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129 Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational pdes, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290 Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023)
Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Miikkulainen, R., Forrest, S.: A biological perspective on evolutionary computation. Nature Machine Intelligence 3(1), 9–15 (2021)
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A., Ong, Y.-S.: Memetic Computation: the Mainspring of Knowledge Transfer in a Data-driven Optimization Era vol. 21. Springer, Switzerland (2018) Hansen [2016] Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. 
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
- Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
- Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE transactions on neural networks 6(4), 911–917 (1995) Zhang et al. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Chen and Chen [1995] Chen, T., Chen, H.: Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks 6(4), 911–917 (1995)
Zhang et al. [2012] Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
Dong and Li [2021] Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
Dong and Yang [2022] Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
Gupta and Ong [2018] Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
Hansen [2016] Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: Composable transformations of Python+NumPy programs (2018)
Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
[2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. 
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. 
Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. 
Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Zhang, R., Lan, Y., Huang, G.-B., Xu, Z.-B.: Universal approximation of extreme learning machine with adaptive growth of hidden nodes. IEEE Transactions on Neural Networks and Learning Systems 23(2), 365–371 (2012) https://doi.org/10.1109/TNNLS.2011.2178124
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution.
arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. 
[2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. 
Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. 
arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Dong, S., Li, Z.: Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Computer Methods in Applied Mechanics and Engineering 387, 114129 (2021) https://doi.org/10.1016/j.cma.2021.114129
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Wang et al.
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
- Dong, S., Yang, J.: On computing the hyperparameter of extreme learning machines: Algorithm and application to computational PDEs, and comparison with classical and high-order finite elements. Journal of Computational Physics 463, 111290 (2022) https://doi.org/10.1016/j.jcp.2022.111290
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. 
Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. 
Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
[2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. 
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). 
https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Gupta, A., Ong, Y.-S.: Memetic Computation: The Mainspring of Knowledge Transfer in a Data-driven Optimization Era, vol. 21. Springer, Switzerland (2018)
- Hansen, N.: The CMA evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016)
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018)
- Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022)
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005)
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers.
arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. 
Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. 
[2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Hansen, N.: The cma evolution strategy: A tutorial. arXiv preprint arXiv:1604.00772 (2016) Bradbury et al. [2018] Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: Jax: composable transformations of python+ numpy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. 
[2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3 Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Bradbury, J., Frostig, R., Hawkins, P., Johnson, M.J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., et al.: JAX: composable transformations of Python+NumPy programs (2018) Tang et al. [2022] Tang, Y., Tian, Y., Ha, D.: EvoJAX: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020)
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. 
Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Tang, Y., Tian, Y., Ha, D.: Evojax: Hardware-accelerated neuroevolution. arXiv preprint arXiv:2202.05008 (2022) Stynes [2005] Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. 
[2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. 
Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. 
[2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. 
[2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. 
In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3 Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Stynes, M.: Steady-state convection-diffusion problems. Acta Numerica 14, 445–508 (2005) Van Erp et al. [2020] Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs.
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 
arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Van Erp, R., Soleimanzadeh, R., Nela, L., Kampitsis, G., Matioli, E.: Co-designing electronics with microfluidics for more sustainable cooling. Nature 585(7824), 211–216 (2020) Patankar [1980] Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980) Gupta [2013] Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. 
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020) https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@Auckland (2013)
- Patankar, S.V.: Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, New York, NY, USA (1980)
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. 
CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. 
Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. 
Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. 
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. 
CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. 
[2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. 
[2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Gupta, A.: Numerical modelling and optimization of non-isothermal, rigid tool liquid composite moulding processes. PhD thesis, ResearchSpace@ Auckland (2013) Bar-Sinai et al. [2019] Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019) Brandstetter et al. [2022] Brandstetter, J., Worrall, D., Welling, M.: Message passing neural pde solvers. arXiv preprint arXiv:2202.03376 (2022) Bec and Khanin [2007] Bec, J., Khanin, K.: Burgers turbulence. Physics reports 447(1-2), 1–66 (2007) Rao et al. [2023] Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023) Meng et al. [2020] Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: Ppinn: Parareal physics-informed neural network for time-dependent pdes. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020) Wang et al. [2022] Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. 
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Bar-Sinai, Y., Hoyer, S., Hickey, J., Brenner, M.P.: Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences 116(31), 15344–15349 (2019)
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1–2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022) Subramanian et al. [2023] Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023) Suganthan and Katuwal [2021] Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. 
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239 Gallicchio and Scardapane [2020] Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68 (2020). https://doi.org/10.1007/978-3-030-43883-8_3 . Springer Baydin et al. [2018] Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. 
Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. 
Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Brandstetter, J., Worrall, D., Welling, M.: Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376 (2022)
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge–Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids.
Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Bec, J., Khanin, K.: Burgers turbulence. Physics Reports 447(1-2), 1–66 (2007)
- Rao, C., Ren, P., Wang, Q., Buyukozturk, O., Sun, H., Liu, Y.: Encoding physics to learn reaction–diffusion processes. Nature Machine Intelligence 5(7), 765–779 (2023)
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Marchine Learning Research 18, 1–43 (2018) Anderson et al. [2020] Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC press, Boca Raton, Florida, USA (2020) Ollivier et al. [2017] Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017) Chiu [2018] Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. 
Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers &\&& Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005 Leonard [1991] Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U Gottlieb and Shu [1998] Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2 Gottlieb, S., Shu, C.-W.: Total variation diminishing runge-kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2
- Meng, X., Li, Z., Zhang, D., Karniadakis, G.E.: PPINN: Parareal physics-informed neural network for time-dependent PDEs. Computer Methods in Applied Mechanics and Engineering 370, 113250 (2020)
- Wang, S., Sankaran, S., Perdikaris, P.: Respecting causality is all you need for training physics-informed neural networks. arXiv preprint arXiv:2203.07404 (2022)
- Subramanian, S., Harrington, P., Keutzer, K., Bhimji, W., Morozov, D., Mahoney, M., Gholami, A.: Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. arXiv preprint arXiv:2306.00258 (2023)
- Suganthan, P.N., Katuwal, R.: On the origins of randomization-based feedforward neural networks. Applied Soft Computing 105, 107239 (2021) https://doi.org/10.1016/j.asoc.2021.107239
- Gallicchio, C., Scardapane, S.: Deep randomized neural networks. In: Recent Trends in Learning From Data: Tutorials from the INNS Big Data and Deep Learning Conference (INNSBDDL2019), pp. 43–68. Springer (2020). https://doi.org/10.1007/978-3-030-43883-8_3
- Baydin, A.G., Pearlmutter, B.A., Radul, A.A., Siskind, J.M.: Automatic differentiation in machine learning: a survey. Journal of Machine Learning Research 18, 1–43 (2018)
- Anderson, D., Tannehill, J.C., Pletcher, R.H., Munipalli, R., Shankar, V.: Computational Fluid Mechanics and Heat Transfer. CRC Press, Boca Raton, Florida, USA (2020)
- Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: A unifying picture via invariance principles. The Journal of Machine Learning Research 18(1), 564–628 (2017)
- Chiu, P.-H.: An improved divergence-free-condition compensated method for solving incompressible flows on collocated grids. Computers & Fluids 162, 39–54 (2018) https://doi.org/10.1016/j.compfluid.2017.12.005
- Leonard, B.P.: The ultimate conservative difference scheme applied to unsteady one-dimensional advection. Computer Methods in Applied Mechanics and Engineering 88, 17–74 (1991) https://doi.org/10.1016/0045-7825(91)90232-U
- Gottlieb, S., Shu, C.-W.: Total variation diminishing Runge-Kutta schemes. Mathematics of Computation 67, 73–85 (1998) https://doi.org/10.1090/S0025-5718-98-00913-2